
    Inferring Interpersonal Relations in Narrative Summaries

    Characterizing relationships between people is fundamental to the understanding of narratives. In this work, we address the problem of inferring the polarity of relationships between people in narrative summaries. We formulate the problem as joint structured prediction for each narrative, and present a model that combines evidence from linguistic and semantic features, as well as features based on the structure of the social community in the text. We also provide a clustering-based approach that can exploit regularities in narrative types, e.g., learn an affinity for love triangles in romantic stories. On a dataset of movie summaries from Wikipedia, our structured models provide more than a 30% error reduction over a competitive baseline that considers pairs of characters in isolation.

    Regulation of Alteration/Deficiency in Activation 3 (ADA3) by Acetylation and its Role in Cell Cycle Regulation and Oncogenesis

    The ADA3 (Alteration/Deficiency in Activation 3) protein is a transcriptional adaptor protein that was initially discovered as a component of several HAT (Histone Acetyltransferase) complexes, the enzyme complexes responsible for histone acetylation, which is a prerequisite for transcription. Earlier studies from Dr. Band’s laboratory and others had deciphered a crucial role of ADA3 in cell cycle regulation (through both the G1/S and G2/M phase transitions) and in maintaining genomic stability. While our laboratory had investigated the mechanism behind the role of ADA3 in the G1/S transition, its role in the G2/M transition remained unknown. Based on this prior knowledge, I began my Ph.D. thesis work in Dr. Band’s laboratory by examining the role of ADA3 in mitosis. During my doctoral research, I demonstrated that ADA3 governs the recruitment of the key centromeric protein CENP-B onto centromeres and regulates chromosome segregation during mitosis. ADA3 can undergo posttranslational modification, including acetylation, and in the course of my Ph.D. research I became interested in how these modifications might regulate the function of ADA3. I showed that ADA3 acetylation is regulated by the coordinated actions of its associated HATs, GCN5, PCAF and p300, and a new partner I discovered, the deacetylase SIRT1. We used mass spectrometry and site-directed mutagenesis to identify the major sites of ADA3 acetylated by GCN5 and p300, and found that acetylation-defective mutants were capable of interacting with HATs and other components of HAT complexes but deficient in their ability to restore ADA3-dependent global or locus-specific histone acetylation marks and cell proliferation in Ada3-deleted MEFs. A parallel focus of my studies was to define the role of ADA3 in HER2+ breast cancers, which emanates from a clinical study from our laboratory revealing that ADA3 is overexpressed/mislocalized in these aggressive tumors. Using cell culture models, I established a link between ADA3 and HER2 signaling pathways. In these cell lines, I found that ADA3 is a downstream target of HER2 and discovered a novel phospho-AKT-phospho-p300-Ac-ADA3 signaling pathway. Importantly, ADA3 knockdown in these cells recapitulates the cell cycle inhibitory effects of the tyrosine kinase inhibitor lapatinib, such as accumulation of the CDK inhibitor p27 and a reduced mitotic index. Taken together, these results highlight the importance of ADA3 as a marker of treatment efficacy and a promising therapeutic target. Given the key importance of ADA3-containing HAT complexes in the regulation of various biological processes, including the cell cycle, my thesis work provides insight into the regulation of the function of these complexes through dynamic ADA3 acetylation.

    Fuse to Forget: Bias Reduction and Selective Memorization through Model Fusion

    Model fusion research aims to aggregate the knowledge of multiple models in order to enhance performance by combining their weights. In this work, we study the inverse, investigating whether and how model fusion can interfere with and reduce unwanted knowledge. We delve into the effects of model fusion on the evolution of learned shortcuts, social biases, and memorization capabilities in fine-tuned language models. Through several experiments covering text classification and generation tasks, our analysis highlights that shared knowledge among models is usually enhanced during model fusion, while unshared knowledge is usually lost or forgotten. Based on this observation, we demonstrate the potential of model fusion as a debiasing tool and showcase its efficacy in addressing privacy concerns associated with language models. Comment: 16 pages, 9 figures, 6 tables
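
    The fusion operation described above is, in its simplest form, a weighted average of the parameters of models that share an architecture. Below is a minimal sketch of that idea in PyTorch, assuming two fine-tuned checkpoints with identical state-dict keys; the function name `fuse_state_dicts` and the equal weighting are illustrative, not the paper's exact procedure.

    ```python
    import torch

    def fuse_state_dicts(state_a, state_b, alpha=0.5):
        """Linearly interpolate two state dicts that share keys and shapes."""
        fused = {}
        for key in state_a:
            if torch.is_floating_point(state_a[key]):
                # Weighted average of the two parents' parameters; knowledge the
                # parents share tends to survive, model-specific knowledge fades.
                fused[key] = alpha * state_a[key] + (1 - alpha) * state_b[key]
            else:
                # Non-float buffers (e.g., integer position ids) are copied as-is.
                fused[key] = state_a[key].clone()
        return fused

    # Usage sketch: load the fused weights into a fresh instance of the same model class.
    # fused_model.load_state_dict(fuse_state_dicts(model_a.state_dict(), model_b.state_dict()))
    ```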

    Discovering User-Interpretable Capabilities of Black-Box Planning Agents

    Several approaches have been developed for answering users' specific questions about AI behavior and for assessing their core functionality in terms of primitive executable actions. However, the problem of summarizing an AI agent's broad capabilities for a user is comparatively new. This paper presents an algorithm for discovering from scratch the suite of high-level "capabilities" that an AI system with arbitrary internal planning algorithms/policies can perform. It computes conditions describing the applicability and effects of these capabilities in user-interpretable terms. Starting from a set of user-interpretable state properties, an AI agent, and a simulator that the agent can interact with, our algorithm returns a set of high-level capabilities with their parameterized descriptions. Empirical evaluation on several game-based scenarios shows that this approach efficiently learns descriptions of various types of AI agents in deterministic, fully observable settings. User studies show that such descriptions are easier to understand and reason with than the agent's primitive actions. Comment: KR 202

    A Comparison of Lexicon-Based and ML-Based Sentiment Analysis: Are There Outlier Words?

    Lexicon-based approaches to sentiment analysis of text are based on each word or lexical entry having a pre-defined weight indicating its sentiment polarity. These weights are usually assigned manually, and how accurate they are compared with machine learning based approaches to computing sentiment is not known. It may be that there are lexical entries whose sentiment values cause a lexicon-based approach to give results that are very different from a machine learning approach. In this paper we compute sentiment for more than 150,000 English language texts drawn from 4 domains using the Hedonometer, a lexicon-based technique, and Azure, a contemporary machine learning based approach that is part of the easy-to-use Azure Cognitive Services family of APIs. We model differences in sentiment scores between the approaches for documents in each domain using regression and analyse the independent variables (Hedonometer lexical entries) as indicators of each word's importance and contribution to the score differences. Our findings are that the importance of a word depends on the domain and there are no standout lexical entries which systematically cause differences in sentiment scores. Comment: 4 pages, to appear in Proceedings of the 31st Irish Conference on Artificial Intelligence and Cognitive Science, December 7th-8th, 202
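
    A Hedonometer-style lexicon score is essentially a frequency-weighted average of per-word sentiment values. The sketch below illustrates that computation with a toy lexicon; the words and weights are made up for illustration and are not the actual labMT entries, whose scores run roughly from 1 to 9.

    ```python
    import re
    from collections import Counter

    # Toy lexicon: word -> sentiment weight (illustrative values only).
    LEXICON = {"love": 8.4, "happy": 8.3, "fine": 6.0, "sad": 2.4, "hate": 2.2}

    def lexicon_sentiment(text, lexicon=LEXICON):
        """Frequency-weighted average sentiment of the in-lexicon words in a text."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(t for t in tokens if t in lexicon)
        total = sum(counts.values())
        if total == 0:
            return None  # no scored words, so the text gets no lexicon score
        return sum(lexicon[w] * c for w, c in counts.items()) / total

    print(lexicon_sentiment("I love this and I am so happy"))       # high score
    print(lexicon_sentiment("I hate waiting and it makes me sad"))  # low score
    ```

    The per-document difference between such a score and the ML-based (Azure) score is what the regression in the paper models, with the presence of individual lexical entries as the independent variables.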

    Production and Assessment of Compost as a Bio-Fertilizer Using Kitchen Waste From NIT Rourkela Hostels and Study of Comparative Plant Growth

    This research project deals with the production of compost from kitchen waste at NIT Rourkela and its utilization for the growth of Helianthus annuus. The kitchen waste from hostels was collected and degraded in a compost bin of dimensions 57 cm × 36.5 cm × 31.5 cm. Various parameters of the compost were monitored at intervals of 15 days up to 60 days. The pH of the compost decreased from 7.71 to 7.19 by the 60th day. Other parameters, such as total carbon (%), nitrogen (%), moisture, pH, C:N ratio and changes in colour, texture and odour, were also monitored. The C:N ratio decreased from 55.6059 to 25.89609 by the 60th day. The moisture of the compost was maintained between 50% and 55%. The study was carried forward by planting sunflower (Helianthus annuus) in compost-to-soil ratios of 1:1 and 2:1, with two soil samples without compost used as controls to study the growth. Changes in shoot length and thickness were observed. The sample containing compost and soil in the ratio 2:1 showed the best growth, with a difference of 5.9 cm between initial and final shoot length, followed by the 1:1 sample, which showed a difference of 5.5 cm. This experimental study concludes that compost made in a compost bin is also efficient and can be used as a bio-fertilizer to enhance plant growth.

    Leveraging Multiple Teachers for Test-Time Adaptation of Language-Guided Classifiers

    Recent approaches have explored language-guided classifiers capable of classifying examples from novel tasks when provided with task-specific natural language explanations, instructions or prompts (Sanh et al., 2022; R. Menon et al., 2022). While these classifiers can generalize in zero-shot settings, their task performance often varies substantially between different language explanations in unpredictable ways (Lu et al., 2022; Gonen et al., 2022). Also, current approaches fail to leverage unlabeled examples that may be available in many scenarios. Here, we introduce TALC, a framework that uses data programming to adapt a language-guided classifier for a new task during inference when provided with explanations from multiple teachers and unlabeled test examples. Our results show that TALC consistently outperforms a competitive baseline from prior work by 9.3% (relative improvement). Further, we demonstrate the robustness of TALC to variations in the quality and quantity of provided explanations, highlighting its potential in scenarios where learning from multiple teachers or a crowd is involved. Our code is available at: https://github.com/WeiKangda/TALC.git.
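
    The data programming step can be read as treating the classifier's prediction under each teacher's explanation as one weak labeling vote on every unlabeled test example, then aggregating the votes into pseudo-labels used for adaptation. The sketch below uses a plain majority vote in place of TALC's actual aggregation model; `predict_with_explanation` is a hypothetical hook onto a language-guided classifier, not part of the released code.

    ```python
    from collections import Counter

    def aggregate_teachers(unlabeled_examples, explanations, predict_with_explanation):
        """Pseudo-label each example by majority vote over per-teacher predictions.

        predict_with_explanation(example, explanation) -> label   (hypothetical hook)
        """
        pseudo_labels = []
        for example in unlabeled_examples:
            votes = Counter(
                predict_with_explanation(example, explanation)
                for explanation in explanations
            )
            pseudo_labels.append(votes.most_common(1)[0][0])
        return pseudo_labels

    # The pseudo-labeled test set could then be used to fine-tune or calibrate the
    # classifier at inference time, which is the adaptation step described above.
    ```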